Update create_deployment notebook #239
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
```diff
 "metadata": {
  "collapsed": true
 },
 "outputs": [],
 "source": [
-  "client.Configuration().host=\"http://localhost:8080\""
+  "config.load_kube_config()\n",
+  "extensions = client.ExtensionsV1beta1Api()"
```
I also changed this line in https://github.com/kubernetes-incubator/client-python/pull/242/files. I think it's better to load the in-cluster config, as the examples are intended to be run from inside a Kubernetes cluster.
@pokoli do you mind if we merge this PR and close yours? This one addresses more issues in the notebook. Once it is merged we can discuss further.
The general issue with these notebooks is figuring out a way to test them quickly.
I am tempted to add a step to the notebooks with a try/except that loads the in-cluster config and falls back to the out-of-cluster config.
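The try-loop idea above could be sketched as a small helper. The helper name and structure are illustrative, not part of this PR; by default it tries the client's in-cluster loader first and falls back to the local kubeconfig, so the same notebook cell works both inside and outside a cluster:

```python
def load_config_with_fallback(loaders=None):
    """Try each config loader in order; return the label of the first that works.

    Hypothetical helper (not part of this PR). With no arguments it tries the
    kubernetes client's in-cluster loader, then falls back to kubeconfig.
    """
    if loaders is None:
        # Deferred import so the fallback logic itself has no hard dependency
        # on the kubernetes package being importable at module load time.
        from kubernetes import config
        loaders = [
            ("in-cluster", config.load_incluster_config),
            ("kubeconfig", config.load_kube_config),
        ]
    last_err = None
    for label, loader in loaders:
        try:
            loader()  # each loader raises if its config source is absent
            return label
        except Exception as err:
            last_err = err
    raise RuntimeError("no usable Kubernetes configuration found") from last_err
```

Because the loader list is injectable, the fallback behavior can be exercised without a cluster or a kubeconfig file.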
@mbohlool I am +1 on this; we need to improve things, but it is a first step.
Makes sense to merge my PR into this one. When I first created my PR I wasn't aware of this one, so please go ahead.
```diff
 "metadata": {
  "collapsed": true
 },
 "outputs": [],
 "source": [
  "extensions = client.ExtensionsV1beta1Api()"
+  "deployment = client.models.extensions_v1beta1_deployment.ExtensionsV1beta1Deployment()"
```
@djkonro actually you should change this to `deployment = client.ExtensionsV1beta1Deployment()`
Force-pushed from 286051a to 55676fa.
```
 },
 "outputs": [],
 "source": [
  "#extensions.delete_namespaced_deployment(name='nginx-deployment', namespace='default', body=client.V1DeleteOptions(), grace_period_seconds=56, propagation_policy=\"Background\")"
```
@mbohlool, I would like some help with this, as I am unable to achieve cascading deletion: only the deployment is deleted, and the dependent pods stay alive.
Search the issues for this. I think somebody had the same problem recently and we discussed it there. Briefly, I think there is a bug upstream (in k8s itself) and you need to delete the children one by one.
I don't think it is a bug. `kubectl delete deployment` has a reaper that cascades the deletion to the pods, but the delete-deployment API call itself does not do that.
@djkonro you can list the pods with the deployment's label and delete them based on that.
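That suggestion could look roughly like the sketch below. The helper name is invented here, and the API object is passed in (expected to behave like `kubernetes.client.CoreV1Api`) so the logic can be exercised without a live cluster:

```python
def delete_pods_by_label(core_api, namespace, label_selector):
    """Delete every pod matching a deployment's label selector.

    Hypothetical helper. `core_api` is expected to behave like
    kubernetes.client.CoreV1Api. Note: if the deployment's ReplicaSet is
    still alive it will recreate these pods, so delete the ReplicaSet (or
    the deployment itself) first. Returns the names of the deleted pods.
    """
    pods = core_api.list_namespaced_pod(namespace, label_selector=label_selector)
    deleted = []
    for pod in pods.items:
        core_api.delete_namespaced_pod(pod.metadata.name, namespace)
        deleted.append(pod.metadata.name)
    return deleted
```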
ref: #162
Looks like with 2.0.0 you should be able to do a cascading delete?
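If #162 is right, the cascade would be requested through the delete options. A rough sketch, with a plain mapping standing in for `client.V1DeleteOptions` so it runs without the package or a cluster, and the helper name invented here:

```python
def cascade_delete_deployment(ext_api, name, namespace="default"):
    """Request server-side cascading deletion of a deployment.

    Hypothetical helper. With a real client you would pass
    client.V1DeleteOptions(propagation_policy="Foreground"); a plain dict
    carrying the same wire-format field is used here so the sketch has no
    hard dependency on the kubernetes package.
    """
    body = {"propagationPolicy": "Foreground"}
    return ext_api.delete_namespaced_deployment(name, namespace, body=body)
```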
@mbohlool, I have tried various options but it does not seem to delete the dependents, even when I set `propagationPolicy` to `Background` or `Foreground`, as in the function call below.

```python
extensions.delete_namespaced_deployment(name='nginx-deployment', namespace='default', body=client.V1DeleteOptions(), grace_period_seconds=56, propagation_policy="Background")
```
Even the `grace_period_seconds` does not seem to take effect.
@sebgoa The pods are part of a ReplicaSet, so they are automatically recreated when I delete them. I usually delete the ReplicaSet in order to delete the pods, but I am looking for a better solution where I can delete them directly from the deployment.
Force-pushed from 0c12ce2 to 9a837ee.
Force-pushed from 7c9439a to 96488f1.
@sebgoa, I have added RollingUpdate and RollBack to the deployment notebook, as described in https://kubernetes.io/docs/concepts/workloads/controllers/deployment/. Please let me know what you think.
@djkonro there is no rolling update or rollback; I think you forgot to push the commit.
@sebgoa Thanks for noticing the omission; sorry, I did not realize the commit did not contain the rolling update or rollback.
Force-pushed from 55676fa to 04ef076.
Force-pushed from 55676fa to 7d98dcc.
/lgtm
No description provided.